Search results: all records where Creators/Authors contains "Gabbard, Joseph"


  1. Emergency response (ER) workers perform extremely demanding physical and cognitive tasks that can result in serious injury and loss of life. Human augmentation technologies have the potential to enhance physical and cognitive work capacities, dramatically transforming the landscape of ER work by reducing injury risk, improving ER outcomes, and helping attract and retain skilled ER workers. This opportunity has been significantly hindered by the lack of high-quality training for ER workers that effectively integrates innovative and intelligent augmentation solutions. Hence, new ER learning environments are needed that are adaptive, affordable, accessible, and continually available for reskilling the ER workforce as technological capabilities continue to improve. This article presents research considerations in the design and integration of use-inspired exoskeletons and augmented reality technologies in ER processes, and identifies the unique cognitive and motor learning needs of each of these technologies in context-independent and ER-relevant scenarios. We propose a human-centered, artificial intelligence (AI)-enabled training framework for these technologies in ER. Finally, we discuss how these human-centered training requirements for nascent technologies are integrated into an intelligent tutoring system that delivers training across tiered access levels, covering the range of virtual, mixed, and physical reality environments.
  2. Objective: We controlled participants' glance behavior while they used head-down displays (HDDs) and head-up displays (HUDs), to isolate driving behavioral changes due to display type across different driving environments. Background: HUD technology has recently been incorporated into vehicles, allowing drivers, in theory, to gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs with traditional displays on human performance show differences in both drivers' visual attention and driving performance. Yet no studies have isolated glance behavior from driving behavior, which limits our ability to understand the cause of these differences and the resulting impact on display design. Method: We developed a novel method to control visual attention in a driving simulator. Twenty experienced drivers sustained visual attention to in-vehicle HDDs and HUDs while driving in both a simple, straight, empty roadway environment and a more realistic environment that included traffic and turns. Results: In the realistic environment, but not the simpler one, we found evidence of differing driving behaviors between display conditions, even though participants' glance behavior was similar. Conclusion: The assumption that visual attention can be evaluated the same way for different types of vehicle displays may therefore be inaccurate. Differences between driving environments also call into question the validity of testing HUDs in simplistic driving environments. Application: As HUD user interfaces are integrated into vehicles, it is important to develop new, sensitive assessment methods to ensure that HUD interfaces are indeed safe for driving.
  3. Augmented reality (AR) head-up displays (HUDs) are a promising way to reduce distraction while driving and performing secondary visual tasks; however, we currently do not know how to effectively evaluate interfaces in this area. In this study, we show that current visual distraction standards for evaluating in-vehicle displays may not apply to AR HUDs. We provide evidence that AR HUDs can afford longer glances with no decrement in driving performance. We propose that measurement methods for driver distraction research should be selected based not only on the nature of the task under evaluation but also on the properties of the method itself.
  4. We present a user study of an advanced driver-assistance system (ADAS) that uses augmented reality (AR) cues to highlight pedestrians and vehicles when approaching intersections of varying complexity. Our main goal is to understand the relationship between the presence or absence of AR cues, driver-initiated takeover rates, and glance behavior when using an SAE Level 2 autonomous vehicle. We carried out a user study with eight participants on a medium-fidelity driving simulator. Overall, we found that AR cues are a promising means of increasing system transparency, drivers' situation awareness, and trust in the system. However, the dynamic allocation of visual attention in partially automated vehicles remains challenging for researchers, as we still have much to understand about when AR cues become a distractor rather than an attention guide.
  5. When navigating by car, developing robust mental representations (spatial knowledge) of the environment is crucial for situations where technology fails or we need to find locations not included in a navigation system's database. In this work, we present a study examining how screen-relative and world-relative augmented reality (AR) head-up display interfaces affect drivers' glance behavior and spatial knowledge acquisition. Results showed that both AR interfaces had a similar impact on the level of spatial knowledge acquired. However, eye-tracking analyses revealed fundamental differences in how participants visually interacted with the two AR interfaces, with conformal graphics demanding more visual attention from drivers.
  6. Using augmented reality (AR) with semi-autonomous aerial systems for civil infrastructure inspection extends human capabilities: it enhances inspectors' ability to access hard-to-reach areas, decreases the physical demands of the task, and augments their visual field of view with useful information. Still unknown, though, is how helpful AR visual aids are when they are imperfect and provide the user with erroneous data. A total of 28 participants flew with an autonomous drone around a simulated bridge in a virtual reality environment and performed a target detection task. In this study, we analyze the effect of AR cue type across discrete levels of target salience by measuring performance on a signal detection task. Results showed significant differences in false alarm rates across target salience conditions, but no significant differences across AR cue types (none, bounding box, corner-bound box, and outline) in terms of hits and misses.
  7. Using augmented reality (AR) with drones for infrastructure inspection can increase human capabilities by helping workers access hard-to-reach areas and supplementing their field of view with useful information. Still unknown, though, is how these aids affect performance when they are imperfect. A total of 28 participants flew with an autonomous drone while completing a target detection task around a simulated bridge. Results indicated significant differences between cued and un-cued trials but not between the four cue types: none, bounding box, corner-bound box, and outline. Differences in trust among the four cues indicate that participants may trust some cue styles more than others.
  8. This work examines how augmented reality (AR) head-worn displays (HWDs) influence worker task performance, compared with traditional paper blueprints, when assembling wooden frame walls of three different sizes. In our study, 18 participants assembled three different-sized frames using one of three display conditions: a conformal AR interface, a tag-along AR interface, or paper blueprints. Results indicate that for large-frame assembly, the conformal AR interface reduced assembly errors, yet there were no differences in assembly time between display conditions. Additionally, traditional paper blueprints resulted in significantly faster assembly times for small-frame assembly.
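The signal detection analyses referenced in entries 6 and 7 compare hits, misses, and false alarms across cue and salience conditions. A minimal sketch of the standard metrics involved (hit rate, false-alarm rate, and the sensitivity index d'), using hypothetical trial counts rather than the authors' data or analysis code:

```python
# Standard signal-detection metrics: hit rate, false-alarm rate, and d'.
# The counts below are hypothetical, for illustration only.
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the inverse
    normal CDF finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant in one cue condition:
print(dprime(hits=18, misses=2, false_alarms=4, correct_rejections=16))
```

With per-condition d' values in hand, the kind of comparison the abstracts describe (e.g., cue type vs. target salience) would typically be tested with an ANOVA over participants.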